Knowledge Graph Embedding (KGE) aims to learn representations for entities
and relations. Most KGE models have achieved great success, especially in
extrapolation scenarios: given an unseen triple (h, r, t), a trained model
can still correctly predict t from (h, r, ?) or h from (?, r, t), which is
an impressive ability. However, most existing KGE works focus on designing
delicate triple modeling functions, which mainly tell us how to measure the
plausibility of observed triples, but offer limited explanation of why the
methods can extrapolate to unseen data and what the important factors are
that help KGE extrapolate. Therefore, in this work, we attempt to study KGE
extrapolation through two problems: 1. How does KGE extrapolate to unseen
data? 2. How to design a KGE model with better extrapolation ability?
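To make the link-prediction setup above concrete, here is a minimal sketch
using a TransE-style score f(h, r, t) = -||h + r - t|| as a stand-in for a
triple modeling function; the dimensions and random embeddings are
illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of link prediction with a TransE-style triple score;
# a trained model would learn the embeddings instead of sampling them.
import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations, dim = 5, 2, 8

E = rng.normal(size=(num_entities, dim))   # entity embeddings
R = rng.normal(size=(num_relations, dim))  # relation embeddings

def score(h, r, t):
    """Plausibility of triple (h, r, t); higher is more plausible."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

# Answer the query (h, r, ?) by ranking every entity as a candidate tail.
h, r = 0, 1
ranking = sorted(range(num_entities), key=lambda t: -score(h, r, t))
print("candidate tails, best first:", ranking)
```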
For problem 1, we first discuss the factors that impact extrapolation and,
from the relation, entity and triple levels respectively, propose three
Semantic Evidences (SEs), which can be observed from the train set and
provide important semantic information for extrapolation. Then we verify the
effectiveness of the SEs through extensive experiments on several typical
KGE methods.
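The abstract does not define the three SEs precisely; as a purely
hypothetical illustration of "evidence observed from the train set", the
sketch below counts simple statistics at each of the three levels (relation
frequency, entity degree, head-tail co-occurrence). These proxies are
assumptions for illustration, not the paper's actual SE definitions.

```python
# Hedged sketch: toy train-set statistics as stand-ins for relation-,
# entity- and triple-level Semantic Evidence.
from collections import Counter

train = [("a", "likes", "b"), ("a", "likes", "c"), ("b", "knows", "c")]

rel_se    = Counter(r for _, r, _ in train)                  # relation level
ent_se    = Counter(e for h, _, t in train for e in (h, t))  # entity level
triple_se = Counter((h, t) for h, _, t in train)             # triple level

print(rel_se, ent_se, triple_se, sep="\n")
```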
For problem 2, to make better use of the three levels of SE, we propose a
novel GNN-based KGE model, called Semantic Evidence aware Graph Neural
Network (SE-GNN). In SE-GNN, each level of SE is modeled explicitly by the
corresponding neighbor pattern and merged sufficiently by multi-layer
aggregation, which contributes to obtaining more extrapolative knowledge
representations.
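The sketch below renders only the architectural idea just stated: one
aggregation branch per SE level, merged over multiple layers. The
mean-pooling aggregator, additive merge, tanh nonlinearity and toy neighbor
lists are assumptions for illustration, not SE-GNN's exact design.

```python
# Hedged sketch of per-SE-level neighbor aggregation with multi-layer merge.
import numpy as np

def aggregate(H, neighbors):
    """Mean-pool neighbor embeddings for every node (empty -> zeros)."""
    return np.stack([H[list(nb)].mean(axis=0) if nb else np.zeros(H.shape[1])
                     for nb in neighbors])

def se_gnn_layer(H, se_neighbors, weights):
    # One message per SE level (relation/entity/triple neighbor patterns),
    # then a simple additive merge followed by a nonlinearity.
    msgs = [aggregate(H, nb) @ W for nb, W in zip(se_neighbors, weights)]
    return np.tanh(H + sum(msgs))

rng = np.random.default_rng(0)
n, d, layers = 4, 8, 2
H = rng.normal(size=(n, d))
# Three neighbor lists, one per SE level (toy graphs here).
se_neighbors = [[{1}, {0}, {3}, {2}] for _ in range(3)]
weights = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]

for _ in range(layers):            # multi-layer aggregation
    H = se_gnn_layer(H, se_neighbors, weights)
print(H.shape)  # (4, 8): SE-aware entity representations
```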
Finally, through extensive experiments on the FB15k-237 and WN18RR datasets,
we show that SE-GNN achieves state-of-the-art performance on the Knowledge
Graph Completion task and exhibits stronger extrapolation ability.